Business Response Survey

BRS 2022 Technical Notes

You can send comments or questions to the Business Response Survey (BRS) staff by email.

Data Collection

Data for the 2022 Business Response Survey (BRS) were collected from August 1 through September 30, 2022. The BRS relied on the existing data collection platform of the BLS Quarterly Census of Employment and Wages (QCEW) program’s Annual Refiling Survey (ARS). BRS survey responses were solicited via email and printed letters, and responses were collected online using the ARS platform. This approach allowed a large, nationally representative sample to be surveyed at minimal cost.

Definitions

Establishments. An individual establishment is generally defined as a single physical location at which one, or predominantly one, type of economic activity is conducted. Most employers covered under the state Unemployment Insurance (UI) laws operate only one place of business.

Industry. The BRS uses the 2017 North American Industry Classification System (NAICS). NAICS is the standard used by federal statistical agencies in classifying business establishments for the purpose of collecting, analyzing, and publishing statistical data on industry. In NAICS, each establishment is assigned a 6-digit code to identify its industry at the finest breakout available in the classification system. However, for publication purposes, data are often summarized at the 2-digit level to represent industry sectors.

Large/small establishments. For these data, establishments with 2021 annual average employment of 1-499 are considered small, and those with employment greater than 499 are considered large.

Telework. This is a work arrangement that allows an employee to work at home, or from another remote location, by using the internet or a computer linked to one’s place of employment, as well as digital communications such as email and phone.

New hires. This includes any new employees hired in July 2022, even if they had not formally started working in July 2022 or had left the position since being hired. Establishments were instructed to report only hires for their specific location and not any other location of the same company.

Job vacancies. This includes any paid position, new or unfilled, that the establishment is taking active steps to recruit or hire for at the time of data collection. For positions with multiple vacancies, establishments were instructed to report the number of candidates they would be willing to hire for that position.

Questionnaire

The BRS asked 22 questions spanning three topic areas:

  • Telework
  • New hires in July 2022
  • Current vacancies and vacancies in the last 12 months

The survey included the following question types:

  • Yes/No, select only one (screener)
  • Multiple Choice, select only one
  • Multiple Choice, select all that apply
  • Free Text Entry

The yes/no questions served as screener questions. If a respondent answered no to a screener, the questionnaire instructed the respondent to jump ahead to a specified question, thereby skipping the questions in between. However, the survey instrument did not enforce the skip.

A couple of questions listed three or more response options, from which the respondent was allowed to select only one. A few questions listed numerous response options and instructed the respondent to select all that apply.

Numerous survey questions asked the respondent to provide a quantitative response by entering a value in a box allowing free text entry. However, the survey instrument did not enforce what kind of text was allowed to be entered. Consequently, responses to these survey questions included numerical entries, alpha characters, special characters, and combinations thereof.

Data Editing

Because of the nature of the 2022 BRS questionnaire, a fair amount of data editing was required prior to estimation. Data edits included:

  • Inferred responses for questions skipped based on a previous screener
  • Free text entry editing
  • Within-question consistency edits
  • Across-question consistency edits
  • Adjustments for Don’t Know and Not Applicable (N/A) responses

Within each of the three 2022 BRS topic areas (telework, new hires, vacancies), there was at least one screener question that prompted respondents answering “No” to skip one or more subsequent questions within that topic area. In each case, answers to the skipped questions were inferred. For purposely skipped questions asking for quantitative responses, inferred answers were set to zero. For other types of purposely skipped questions, inferred answers were set to categories like “None” or “Not Applicable.”

The survey questions that asked for quantitative answers via free text entry yielded responses that included numeric, alpha, or special characters—or a combination thereof. This led to the need for some data to be cleaned. Data cleaning included text editing procedures such as parsing text entries into separate words, removing blanks and special characters, converting spelled-out numbers to actual numbers, converting fractions to decimals, converting ranges of numbers to midpoints, distinguishing periods meant as decimal points from periods used for other purposes, and standardizing various spellings and misspellings of commonly reported words.

Non-missing responses from which no usable numerical value could be extracted were categorized as “Don’t Know” responses. They were counted as responses for response rate purposes, but the responses did not contribute to estimation other than through the adjustment of sampling weights to account for them.

Once all edits were completed for the quantitative answer questions, usable numerical responses were binned into categories. Specifically, for numerical data, the bins were 0, 1-2, 3-5, 6-10, 11-19, and 20+. For some questions, estimates for each bin are published, whereas for others, estimates for only the 0 and/or 1+ categories are published.
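To illustrate the kind of cleaning and binning described above, the following sketch parses a free-text entry and maps it to the published categories. The function names, the word list, and the range handling are simplified assumptions; the actual BLS editing procedures were more extensive.

```python
import re

# Published bins for cleaned quantitative free-text responses.
BINS = [(0, 0, "0"), (1, 2, "1-2"), (3, 5, "3-5"),
        (6, 10, "6-10"), (11, 19, "11-19"), (20, float("inf"), "20+")]

# A few spelled-out numbers; the real edits standardized many more
# spellings and misspellings of commonly reported words.
WORDS_TO_NUMBERS = {"none": 0, "zero": 0, "one": 1, "two": 2,
                    "three": 3, "four": 4, "five": 5, "ten": 10}

def extract_value(raw):
    """Pull a usable numeric value out of a free-text entry.

    Returns a float, or None if no usable value can be extracted
    (such responses were treated as "Don't Know").
    """
    text = raw.strip().lower()
    if text in WORDS_TO_NUMBERS:
        return float(WORDS_TO_NUMBERS[text])
    # Ranges such as "3-5" or "3 to 5" are converted to their midpoint.
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*(?:-|to)\s*(\d+(?:\.\d+)?)", text)
    if m:
        return (float(m.group(1)) + float(m.group(2))) / 2
    # Otherwise keep the first number found, dropping stray characters
    # (e.g., "10 employees", "~10").
    m = re.search(r"\d+(?:\.\d+)?", text)
    return float(m.group()) if m else None

def bin_value(value):
    """Map a cleaned numeric value to its published bin label."""
    v = round(value)
    for low, high, label in BINS:
        if low <= v <= high:
            return label
    raise ValueError(value)

# Example: extract_value("3 to 5") -> 4.0, which bins to "3-5";
# extract_value("unsure") -> None, treated as a "Don't Know" response.
```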

There were several questions that asked respondents to check all response options that apply. These questions included either a “None of the Above” or an N/A response option. For these questions, it was possible for respondents to incorrectly select one of these options in addition to one or more other options. In these instances, survey responses were edited by effectively deselecting the “None of the Above” or N/A option. 

Within survey topic areas, there were some logical relationships between questions that triggered a need to assess respondent-level answer consistency across questions. For example, question 5 asked if the establishment had any hires in July 2022, question 6 asked how many new employees were hired in July 2022, and question 7 asked how many of these new hires will telework all the time. If a respondent answered “Yes” to question 5 and “10” to question 6, then the respondent’s answer to question 7 should have been less than or equal to ten. If this was not the case, the respondent’s answers to both question 6 and 7 were edited to “Don’t Know.” There were many other ways for a respondent’s answers to exhibit inconsistency across questions. When possible, these inconsistencies were resolved via data editing on a scenario-by-scenario basis. When resolution was not possible, the affected responses were ultimately treated as “Don’t Know.”

In some cases, the logical relationship between questions enabled further text editing that helped responses go from being unusable to being usable. For example, if a respondent answered “Yes” to question 5, answered “10” to question 6, and answered “All” to question 7, the question 7 response was edited to “10.”
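A minimal sketch of these cross-question edits follows, using hypothetical keys q5, q6, and q7 for the hiring screener, the number of July 2022 hires, and the number of those hires who will telework all the time; the actual edits covered many more scenarios.

```python
DONT_KNOW = "Don't Know"

def edit_hiring_block(resp):
    """Apply the hiring-topic consistency edits described in the text.

    `resp` holds one respondent's answers, e.g. {"q5": "Yes", "q6": 10, "q7": "All"}.
    """
    edited = dict(resp)
    q5, q6, q7 = edited.get("q5"), edited.get("q6"), edited.get("q7")

    # A "No" on the screener means the follow-up questions were purposely
    # skipped; their quantitative answers are inferred as zero.
    if q5 == "No":
        edited["q6"] = 0
        edited["q7"] = 0
        return edited

    # "All" of the new hires teleworking all the time resolves to the q6 count.
    if q7 == "All" and isinstance(q6, (int, float)):
        q7 = edited["q7"] = q6

    # New hires teleworking all the time cannot exceed total new hires.
    if isinstance(q6, (int, float)) and isinstance(q7, (int, float)) and q7 > q6:
        edited["q6"] = edited["q7"] = DONT_KNOW

    return edited

# Example: edit_hiring_block({"q5": "Yes", "q6": 10, "q7": 15})
# returns {"q5": "Yes", "q6": "Don't Know", "q7": "Don't Know"}.
```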

After these data edits were implemented, some of the 22 questions in the 2022 BRS included “Don’t Know” and/or N/A responses, with some being explicitly listed response options in the survey questionnaire and others being inferred responses resulting from data editing procedures. Estimates were produced by treating the "Don't Know" responses as non-respondents, thereby producing estimates for only the non-"Don't Know" response options after adjusting for the "Don't know" responses. Typically, for questions with N/A response options, estimates were produced that treated N/A responses as though the respondent answered “none” and, therefore, estimates for the non-N/A response options were not adjusted for the N/A responses. The one exception was question 4. For that question, estimates were produced for only non-N/A response options after adjusting for the N/A responses due to the specific nature of the question.

Although there are similarities between the treatment of “Don’t Know” response options and N/A response options, there is one fundamental difference. When adjusting non-“Don’t Know” responses for “Don’t Know” responses, the adjusted estimates pertain to the full original universe of inference. However, when adjusting non-N/A responses for N/A responses, the adjusted estimates pertain to only the applicable subset of the original universe of inference. For estimates that involve these N/A adjustments, population counts of the relevant redefined universes of inference are not known and can only be estimated based on survey results.

Question 2 presented some unique data editing challenges. Respondents were asked to provide three quantitative responses via free text entry to the question “In a typical week, what percent of your employees CURRENTLY telework in the following amounts? Answers should total 100%.” The response options were:

  • All the time (remote employee)
  • Some of the time (some work hours or days via telework)
  • Rarely or never (rare occasions of telework, or full-time on-site)

This question is a repeat of a question asked in the 2021 BRS, but the response mechanism changed from 2021 to 2022. In the 2021 BRS, respondents selected percentages from a drop-down list that offered a limited set of values to choose from (0%, 5%, 10%, 20%…, 80%, 90%, 95%, 100%). In the 2022 BRS, respondents provided values via free text entry. As described earlier, survey questions that utilized free text entry responses yielded data that required editing. Question 2 was no exception, and the free text data edits discussed earlier were also applied to question 2. However, there were some additional edits applied to question 2 that were not applied to the other free text entry questions.

First, in the 2021 BRS, responses to this question were restructured for estimation purposes. In the 2022 BRS, the restructuring was maintained in large part to facilitate the comparability of results across the two surveys. Specifically, the restructured categories were:

  • Establishments where all employees telework all the time
  • Establishments where one or more employees telework some (but not all) of the time
  • Establishments where all employees telework rarely or never

Next, in the 2021 BRS, many respondents provided answers of 0% to all three subparts of this question. Extensive research showed that an overwhelming majority of these cases translated to establishments that offered no telework. The research involved matching the 2021 BRS cases to telework information collected in the 2020 BRS, as well as a follow-up survey conducted on a subset of the 2021 BRS cases that answered 0% to all three subparts of the question. The same phenomenon of respondents providing answers of 0% to all three subparts of this question carried over to 2022 BRS question 2. Consequently, whenever a respondent gave a triple zero response to the three subparts of question 2 in the 2022 BRS, the result was edited by assigning the respondent to the “establishments where all employees telework rarely or never” category.

Last, for some responses, the percentages given for the three subparts did not sum to 100. Other respondents answered only one or two of the three subparts. Further response editing was performed to address these issues when there was enough conclusive information to make reasonable edits. In some cases, conclusive information came in the form of alphabetic text. For example, if a respondent entered “some telework” in the appropriate question subpart and there was no conflicting information in the other subparts, the respondent was put into the “establishments with employees teleworking some of the time” category.
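The sketch below shows one way the edited question 2 percentages could be mapped to the restructured categories, including the triple-zero edit. The function and the treatment of combinations not covered by the three categories are illustrative assumptions only.

```python
def classify_telework(all_time, some_time, rarely):
    """Assign an establishment to a restructured question 2 category,
    given the edited percentages for "all the time", "some of the time",
    and "rarely or never" (expected to total 100 after editing)."""
    # Triple-zero responses were edited into the "rarely or never" category.
    if all_time == some_time == rarely == 0:
        return "all employees telework rarely or never"
    if all_time == 100:
        return "all employees telework all the time"
    if some_time > 0:
        return "one or more employees telework some (but not all) of the time"
    if rarely == 100:
        return "all employees telework rarely or never"
    # Other combinations (e.g., a mix of "all the time" and "rarely or never")
    # are left unassigned in this simplified sketch.
    return None
```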

Sample Design and Selection Procedures

For the 2022 BRS, BLS selected a stratified sample of approximately 340,000 establishments from a universe of over 8.9 million private-sector establishments. The original sample consisted of about 319,000 establishments. An additional sample of 21,000 establishments for which email addresses were available was selected to study the impact of adding email addresses to the sampling criteria. Establishments initially contacted via email are less costly to collect data from.

The universe source for the 2022 BRS sample was the set of establishments from the 2021 fourth quarter BLS Business Register that were identified as in-scope for this survey. The BLS Business Register is a comprehensive quarterly business name and address file of employers subject to state Unemployment Insurance (UI) laws. It is sourced from data gathered by the QCEW program. Each quarter, QCEW employment and wage information is collected and summarized at various levels of geography and industry. Geographic breakouts include county, Metropolitan Statistical Area (MSA), state, and national. Industry breakouts are available at the detailed 3-, 4-, 5-, and 6-digit NAICS levels, as well as at the industry sector level and at the higher-level goods-producing and services-producing designations. The QCEW covers all 50 states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. The primary sources of data for these 53 entities are the Quarterly Contributions Reports (QCRs) submitted to State Workforce Agencies (SWAs) by employers subject to state UI laws. The QCEW program also gathers separately sourced data for federal government employees covered by the Unemployment Compensation for Federal Employees (UCFE) program.

There were about 11 million establishments on the 2021 fourth quarter BLS Business Register, which served as the source of the BRS’s sampling universe. However, about 2 million of these establishments were determined to be out-of-scope for the survey. The following categories of establishments were excluded from the universe:

  • Public Administration and Government (NAICS 92)
  • Private Households (NAICS 814110)
  • U.S. Postal Service (NAICS 491110)
  • Services for the Elderly and Disabled Persons (NAICS 624120) with establishment size (i.e., employment count) = 1
  • Unclassified Accounts (NAICS 999999)

The 2022 BRS leveraged the technical and collection infrastructure of the ARS. While the synchronization of the two surveys was efficient, it created a need to adapt the BRS sample in accordance with some of the constraints imposed on the ARS sample. Establishments with one to three employees are never administered the ARS, and roughly one-third of the establishments eligible for the ARS are randomly chosen to be administered the ARS in any given year. During BRS sample selection, active ARS eligible establishments and ARS ineligible establishments were “selectable,” whereas inactive ARS eligible establishments were disallowed from selection, in part as a means of managing respondent burden over time.

To integrate the BRS sample into the ARS framework, each establishment in the BRS sampling universe was categorized into one of the following groups:

  • ARS eligible establishments – active for this year’s ARS (BRS selectable)
  • ARS eligible establishments – inactive for this year’s ARS (BRS not selectable)
  • ARS ineligible establishments (BRS selectable)

Each BRS sampling stratum consisted of establishments from one or more of the groups above. Within strata containing only active ARS eligible establishments or only ARS ineligible establishments, sample selection proceeded with no restrictions using simple random sampling. Strata containing only inactive ARS eligible establishments were imputed because there were no selectable establishments and, therefore, no survey results. For any stratum containing a mix of ARS eligible and ARS ineligible establishments, stratum sample sizes were allocated proportionately to each sub-population. Within the stratum’s ARS ineligible sub-population, sample selection proceeded with no restrictions using simple random sampling. Within the stratum’s ARS eligible sub-population, sample selection proceeded by taking a simple random sample from the active/selectable establishments.

Note that for any stratum containing both active ARS eligible and inactive ARS eligible establishments, the sample was selected from only the active portion of the stratum. This selection was still considered to be representative of all ARS eligible establishments in the stratum, regardless of active/inactive status since the determination of ARS active/inactive status was random. Because of this, and because stratum sample sizes were proportionately allocated to eligible/ineligible sub-populations, sample units were equally weighted within (but not across) strata and survey question combinations.
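A minimal sketch of the within-stratum selection rules described above, assuming a simple list-based frame; the data structures and the rounding used for proportional allocation are illustrative, not the production sampling procedure.

```python
import random

def select_stratum_sample(ars_eligible_active, ars_eligible_inactive,
                          ars_ineligible, stratum_sample_size, rng=random):
    """Select one stratum's BRS sample under the ARS constraints.

    Active ARS eligible and ARS ineligible establishments are selectable;
    inactive ARS eligible establishments are not. In mixed strata, the
    sample size is allocated proportionately to the eligible and ineligible
    sub-populations before simple random sampling within each.
    """
    eligible = ars_eligible_active + ars_eligible_inactive
    if not eligible or not ars_ineligible:
        # Single-group stratum: simple random sample of whatever is selectable.
        selectable = ars_eligible_active + ars_ineligible
        return rng.sample(selectable, min(stratum_sample_size, len(selectable)))

    # Mixed stratum: allocate proportionately to the two sub-populations.
    total = len(eligible) + len(ars_ineligible)
    n_eligible = round(stratum_sample_size * len(eligible) / total)
    n_ineligible = stratum_sample_size - n_eligible

    sample = rng.sample(ars_eligible_active,
                        min(n_eligible, len(ars_eligible_active)))
    sample += rng.sample(ars_ineligible,
                         min(n_ineligible, len(ars_ineligible)))
    return sample
```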

When designing the survey and determining sample sizes, BLS researchers, analysts, and methodologists collaborated to identify the key research goals. As part of this process, a balance had to be struck between producing precise estimates for various establishment aggregations and the costs associated with fielding a sample that could deliver on those goals. Based on the types of administrative data available for establishments on the BLS Business Register and based on the team’s experience analyzing similar establishment-based surveys, research goals centered on creating survey estimates for different combinations of establishment geography, industry type, and/or establishment size. This motivated the decision to choose a design that stratified on all three factors. A decision was then made to define granular strata to keep the strata homogeneous and to facilitate the construction of a wide array of broader composite estimates as functions of the more narrowly defined strata estimates.

In the 2022 BRS, strata were defined jointly on the following factors:

  • State {All states plus the District of Columbia, Puerto Rico, and the U.S. Virgin Islands}
  • Industry type, based primarily on 2-digit NAICS {11-21, 22, 23, 31-33, 42, 44-45, 48-49*, 4811, 484, 51, 52-53, 54-56, 61, 62*, 71, 72, 81}
  • Establishment size, based on employment {1-4, 5-9, 10-19, 20-49, 50-99, 100-249, 250-499, 500-999, 1000+}

In the industry type list above, industry grouping 48-49* excludes industries with NAICS classifications of 484 and 4811. Industry 62* excludes industries with a NAICS classification of 624120 that also have an establishment size of one.

In the establishment size list above, all nine “narrow” size groupings are given. Some BRS analyses were conducted using two other broader establishment size groupings – a “medium-width” grouping and a “broad” (or large/small) grouping. The medium-width size classes were 1-19, 20-99, 100-499, and 500+. The large/small groupings were 0-499 and 500+.

BLS researchers and analysts identified specific size class, state–industry, state–size class, and industry–size class strata aggregations as the key levels at which to produce estimates to a certain degree of precision while still being realistic about survey costs and burden. These aggregations were used to drive sample size determination. Specifically, they were:

  • State by Goods-Producing/Services-Producing industry type category {52*2 = 104 estimation cells}
  • State by medium-width establishment size category {52*4 = 208 estimation cells}
  • Modified NAICS sector by medium-width establishment size category {15*4 = 60 estimation cells}
  • Narrow establishment size category {9 estimation cells}

Research interest was not, and is not, limited to these aggregations. However, because these were the aggregates initially identified as the most important ones, the sample was designed to achieve a desired precision when estimating specifically for these groupings. Conversely, the sample was not designed to achieve a desired precision when estimating for other groupings, although in some cases the desired precision was achieved anyway. Researchers were certainly interested in estimating with precision at broader levels such as national, state, modified NAICS sector, and narrow size class, but a sample that allowed for the generation of precise estimates for the four aggregates listed above would also allow for the generation of precise estimates for these broader-level aggregates.

For each estimation cell within each of the four key aggregates listed above, sample sufficiency counts were determined based on estimating proportions to an agreed upon degree of precision. The formula for the sample sufficiency of an estimation cell was based on the deconstruction of the formula for the variance of a proportion (using simple random sampling within the cell). Estimation cell sample sufficiency counts were then allocated proportionately to all strata within each cell. The result was a set of four “allocated sufficiency counts” per stratum. For each stratum, the maximum of the four sufficiency counts was chosen. Each stratum’s chosen sufficiency count was then divided by an estimated survey response rate to derive a stratum sample size. If the chosen value exceeded the number of selectable establishments in a stratum, the stratum’s final sample size was set equal to its number of selectable establishments. In that case, the truncated sample size was reallocated to other strata mapping to the same estimation cell. Once sample sizes were finalized, samples were selected within each stratum as described earlier. 
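The notes do not give the exact sufficiency formula, but a sufficiency count derived from the variance of a proportion under simple random sampling commonly takes a form like the one below, shown here only as a hedged reconstruction (the symbols are introduced for illustration: an assumed proportion \(p\), a target margin of error \(e\), a confidence multiplier \(z\), and an expected response rate \(r\)):

\[
n_{\text{sufficient}} = \frac{z^{2}\, p(1-p)}{e^{2}}, \qquad
n_{\text{stratum}} = \frac{n_{\text{sufficient}} \cdot N_{h}/N_{\text{cell}}}{r},
\]

where \(N_{h}/N_{\text{cell}}\) reflects the proportional allocation of the cell's sufficiency count to stratum \(h\).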

Point Estimation

For the 2022 BRS, the main survey measures of interest included:

  • Proportion of establishments possessing an attribute
  • Number of establishments possessing an attribute
  • Proportion of employees working at establishments that possess an attribute
  • Number of employees working at establishments that possess an attribute

Each measure was estimated within each stratum, provided the stratum included at least one usable response. Strata estimates were then combined to derive composite estimates for various analysis aggregations, such as national estimates, state estimates, and NAICS sector estimates.

For estimation methodology purposes, the primary measure of interest was the estimated proportion of establishments possessing an attribute being assessed by a survey question, such as the proportion of establishments that expect the amount of time that employees are allowed to telework to increase (in the next six months from the time they completed the survey). The other estimates were then calculated as functions of these proportions.

Specifically, within-stratum establishment count estimates were calculated as the product of the stratum’s establishment proportion estimate and the stratum’s total establishment population. Similarly, within-stratum employment count estimates were calculated as the product of the stratum’s establishment proportion estimate and the stratum’s total employment.
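In symbols (notation introduced here for clarity), with \(\hat{p}_{h}\) the stratum establishment proportion estimate, \(N_{h}\) the stratum establishment population, and \(E_{h}\) the stratum total employment:

\[
\widehat{N}^{\,\text{attr}}_{h} = \hat{p}_{h}\, N_{h}, \qquad
\widehat{E}^{\,\text{attr}}_{h} = \hat{p}_{h}\, E_{h}.
\]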

Within each stratum, for a particular survey question, establishment proportion estimates were calculated over the sample units that:

  • Responded to at least 4 of the 22 survey questions
  • Responded to the survey question with something other than a response of “Don’t Know”
  • Responded to the survey question with something other than a response of “Not Applicable”

When estimating stratum-level establishment and employment counts, sample unit weights were adjusted upward to account for both unit and item nonresponse. For these purposes, “Don’t Know” responses were treated as item nonresponse.

Final composite estimation was achieved in stages:

  • Direct strata estimation (for strata with at least one usable response)
  • Preliminary composite estimation (for strata with at least one usable response)
  • Strata imputation (for strata with no usable response)
  • Final composite estimation (incorporated direct and imputed strata estimates)

Direct strata estimation was conducted for strata and survey questions for which there was at least one usable response. From these strata-level results, preliminary (i.e., first-pass) composite estimates were produced for establishment proportions (and their variances) for various aggregations of strata, such as national, state, and NAICS sector.

Composite establishment proportion estimates were calculated as weighted sums of strata establishment proportion estimates. Composite estimation weights (i.e., strata weights) were calculated as each stratum’s establishment population proportion relative to the total number of establishments in the composite.
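Written out with the same notation, the composite establishment proportion over a set of strata \(C\) is

\[
\hat{p}_{C} = \sum_{h \in C} W_{h}\, \hat{p}_{h}, \qquad
W_{h} = \frac{N_{h}}{\sum_{k \in C} N_{k}}.
\]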

During preliminary composite estimation, the weighted sum was taken over only those strata for which direct strata estimates could be calculated. Therefore, strata weights were adjusted to account for only those strata contributing to a particular preliminary composite estimate.

Preliminary composite estimates for establishment proportions (and their variances) were then used to impute missing strata-level establishment proportions (and their variances). These imputed strata-level establishment proportions were then used to calculate strata-level estimates for establishment and employment counts by multiplying the imputed proportions by their corresponding stratum establishment and employment count populations, respectively.

Lastly, final composite estimation was run using direct strata estimates where possible and imputed strata estimates where necessary. As was the case during preliminary composite estimation, final composite proportion estimates were calculated as weighted sums of strata establishment proportion estimates. However, during final composite estimation, all strata contained values (either directly calculated or imputed); therefore, strata weights no longer needed to be adjusted for missing strata.

Final composite estimates of establishment and employment counts were calculated as unweighted sums of the relevant strata estimates.

Final composite estimates of employment proportions were calculated as weighted sums of strata establishment proportions, where strata weights were calculated as each stratum’s total employment proportion relative to the total employment in the composite.

Variance Estimation

Variance estimates were calculated for the following survey measures of interest:

  • Proportion of establishments possessing an attribute
  • Number of establishments possessing an attribute
  • Proportion of employees working at establishments that possess an attribute
  • Number of employees working at establishments that possess an attribute

Each variance was estimated within each stratum, provided the stratum included at least one usable response. Strata variance estimates were then combined to derive composite variance estimates for various analysis aggregations, such as national estimates and state estimates.

For variance estimation methodology purposes, the primary variance of interest was the estimated variance of the proportion of establishments possessing an attribute being assessed by a survey question.

Variance estimation for establishment proportions involved (1) the application of the basic formula for the variance of a proportion drawn from a simple random sample and (2) the application of the general formula for the variance of a composite proportion estimator drawn from a stratified random sample. More specifically, regarding (2), the composite variance estimator used for establishment proportions was the sum of the product of each stratum’s relevant variance estimate and the square of its stratum weight, where the sum is taken over all strata in the composite.
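With the same notation, and noting that the exact within-stratum variance form used (for example, whether a finite population correction was applied) is not stated here, the two components can be written as

\[
\widehat{\operatorname{Var}}(\hat{p}_{h}) \approx \frac{\hat{p}_{h}(1-\hat{p}_{h})}{n_{h}}, \qquad
\widehat{\operatorname{Var}}(\hat{p}_{C}) = \sum_{h \in C} W_{h}^{2}\, \widehat{\operatorname{Var}}(\hat{p}_{h}),
\]

where \(n_{h}\) is the number of usable item responses in stratum \(h\).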

For any stratum in which every establishment in the universe was sampled and provided a usable response to a particular question, the stratum variance was set to zero. Otherwise, for any stratum with more than one establishment in the universe but only one or two item responses for a particular survey question, the stratum variance was set to a default value. This was done to avoid setting these variances equal to zero, which could contribute to underestimating composite variance estimates. The default value was equivalent to the variance that would have been realized if the stratum had two responses, with one responding in the affirmative to the attribute being analyzed and the other responding in the negative. The same default variance was assigned in strata that had to be imputed.
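As an illustration only, under the \(\hat{p}_{h}(1-\hat{p}_{h})/n_{h}\) form shown above, the default corresponds to \(\hat{p}_{h} = 0.5\) and \(n_{h} = 2\), giving \(0.5 \times 0.5 / 2 = 0.125\); the exact default value depends on the variance form actually used, which is not specified here.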

Stratum-level variance estimates for establishment and employment counts were calculated as functions of the corresponding stratum-level establishment proportion variance estimates. For example, because each stratum-level establishment count estimate was calculated as the product of the stratum-level establishment proportion estimate and the stratum’s total establishment population, the stratum-level establishment count variance estimate was set equal to the stratum-level establishment proportion variance estimate times the square of the stratum’s total establishment population. Stratum-level employment count variance estimates followed the same formulation, except strata employment counts were used instead of strata establishment counts.
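Using the notation above, the stratum-level count variances reduce to

\[
\widehat{\operatorname{Var}}\!\left(\widehat{N}^{\,\text{attr}}_{h}\right) = N_{h}^{2}\, \widehat{\operatorname{Var}}(\hat{p}_{h}), \qquad
\widehat{\operatorname{Var}}\!\left(\widehat{E}^{\,\text{attr}}_{h}\right) = E_{h}^{2}\, \widehat{\operatorname{Var}}(\hat{p}_{h}).
\]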

Stratum-level variance estimates for employment proportions were set equal to the stratum-level variance estimates for establishment proportions, since employment proportions themselves were set equal to the directly calculated establishment proportions. Composite variance estimates for employment proportions were calculated using the same formula as for composite variance estimates for establishment proportions, except using employment-based strata weights instead of establishment-based strata weights.

Preliminary composite variance estimates were subject to the same strata weight adjustments as were preliminary composite proportion estimates. Similarly, final composite variance estimates were calculated using unadjusted strata weights because, at that point, all strata had either direct stratum-level variance estimates or imputed stratum-level variance estimates (i.e., there were no missing variance estimates).

Note that although preliminary composite variance estimates were calculated, they were not used during strata imputation for the 2022 BRS. Instead, as mentioned earlier, imputed strata variances were set according to the aforementioned default formula.

Non-Response Adjustments

In the 2022 BRS, a respondent that answered at least 4 of the 22 survey questions was considered a usable survey respondent. Otherwise, a respondent was considered a unit non-respondent. Item nonresponse refers to a respondent's failure to answer a specific question. 

The sample design/estimation strategy was to select independent samples within survey strata and then to calculate composite estimates by aggregating across strata results. The sample design stratified on three variables – state, modified NAICS sector, and narrow size class – yielding 53 × 17 × 9 = 8,109 possible survey strata. However, 10.6 percent (856) of the possible survey strata contained no establishments. Of the 7,253 non-empty strata, 8.6 percent (626) contained no selectable establishments due to constraints associated with synchronizing the sample with the ARS. From each of the 6,627 non-empty strata containing at least one selectable establishment, a sample of at least one establishment was drawn. Of these 6,627 strata, 16.4 percent (1,089) had no usable survey respondents, leaving 5,538 strata with at least one usable survey respondent.

Skipped questions created situations where a stratum had at least one usable response for one question but no usable responses for another question. For example, for question 5, there were 32 strata that had at least one usable survey respondent but no usable item responses to the specific question.

As listed and described in the Point Estimation section, to accommodate strata with no usable item responses, final composite estimation was achieved in four stages, including strata imputation. During strata imputation, survey stratum and question combinations that had no usable item responses had their establishment proportions and variances imputed according to the following hierarchy of composite estimates, ordered from highest to lowest priority:

  • State, modified NAICS sector, medium-width size class (1-19, 20-99, 100-499, 500+)
  • State, modified NAICS sector, size class large/small (1-499, 500+)
  • State, NAICS goods/services (G, S), size class large/small
  • Census division, NAICS goods/services, size class large/small
  • Census region, NAICS goods/services, size class large/small
  • Narrow size class (1-4, 5-9, 10-19, 20-49, 50-99, 100-249, 250-499, 500-999, 1000+)

For example, for a particular survey question, suppose a state had no usable responses for modified NAICS sector 11-21 and size class 1,000+. Further, suppose that for the same question and state there were multiple responses for modified NAICS sector 11-21 and size class 500-999. In this case, there would be a viable composite estimate for the stratum’s corresponding state, sector, and medium-width size class composite cell. Therefore, the stratum’s establishment proportion and establishment proportion variance would get imputed from that first composite in the hierarchy.

As another example, for a particular survey question, suppose a state had no usable responses for modified NAICS sector 11-21 for both size classes 500-999 and 1,000+. Further, suppose that for the same question and state there were multiple responses for modified NAICS sectors 22, 23, and 31-33 for size class 500+. In this case, the first composite in the imputation hierarchy would prove inadequate. However, since modified NAICS sectors 11-21, 22, 23, and 31-33 are all categorized as goods-producing industries, the third composite down the priority list would yield a viable composite estimate and, therefore, would be used for imputation for the stratum.

It is worth noting that the lowest-prioritized composite in the imputation hierarchy – the narrow size class composite – is the fail-safe since composite estimates existed for all nine size classes for every question.
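A minimal sketch of this hierarchy-based fallback is shown below; the cell-key construction and data structures are hypothetical, and the real imputation operated on establishment proportions and their variances rather than a single value.

```python
# Imputation hierarchy, from highest to lowest priority. Each entry names the
# hypothetical stratum attributes used to build a composite cell key.
HIERARCHY = [
    ("state", "sector", "medium_size"),
    ("state", "sector", "large_small"),
    ("state", "goods_services", "large_small"),
    ("census_division", "goods_services", "large_small"),
    ("census_region", "goods_services", "large_small"),
    ("narrow_size",),
]

def impute_from_hierarchy(stratum, composite_estimates):
    """Return the first available composite estimate for a stratum with no
    usable item responses, walking the hierarchy in priority order.

    `composite_estimates` maps (level, cell key) to a preliminary composite
    estimate; `stratum` is a dict of the attributes used to build keys.
    """
    for level in HIERARCHY:
        key = (level, tuple(stratum[attr] for attr in level))
        if key in composite_estimates:
            return composite_estimates[key]
    # Narrow size class composites exist for every question, so this
    # fail-safe should not be reached in practice.
    raise LookupError("no composite estimate available for imputation")
```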

The previous discussion details strata-level response. Within each stratum, for a given survey question, all usable survey respondents that answered the question were assigned the same sample unit weight. For estimates that do not redefine the universe of inference, the assigned sample unit weight was set as the stratum establishment (or employment) population divided by the number of usable respondents. As such, within-stratum sample weights were essentially the original sampling weights adjusted uniformly upward for unit and item nonresponse, as well as for the elimination of otherwise usable respondents due to “Don’t Know” response adjustments or certain kinds of cross-question conditioning. For estimates that did involve the redefinition of the universe of inference, original sampling weights were adjusted upward for unit and item nonresponse, but they were not adjusted for the elimination of otherwise usable respondents due to N/A response adjustments or certain kinds of cross-question conditioning.
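In symbols (notation introduced here), for a stratum \(h\) with establishment population \(N_{h}\) (or total employment \(E_{h}\)) and \(r_{h}\) usable respondents to a given question, the assigned sample unit weight is

\[
w_{h} = \frac{N_{h}}{r_{h}} \;\; \text{(establishment weighting)} \qquad \text{or} \qquad
w_{h} = \frac{E_{h}}{r_{h}} \;\; \text{(employment weighting)}.
\]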

As a matter of good practice, sample unit weights were used in the construction of strata estimates. However, because they were set equal for each question within each stratum, the same proportion estimates could have been achieved without using them. Finally, note that although equal sample unit weighting was used within each question and stratum combination, sample unit weights could and did vary across question and stratum combinations.

Response Rates

The 2022 BRS consisted of 22 questions to which establishments could respond. A survey was considered usable if the respondent answered at least 4 of the 22 questions, including inferred answers where appropriate. Estimates were generated from usable surveys only.

Of the approximately 340,000 sampled establishments, approximately 5,300 were deemed uncollectible prior to fielding the sample. These uncollectible establishments were treated as non-respondents. Typically, these were establishments that changed status between the time the universe was drawn and data collection and could no longer be contacted or could not respond to the survey. Thus, the 2022 BRS was administered to approximately 335,000 establishments.

Of the establishments that were given the opportunity to take the survey, about 91,000 participated to some degree, and approximately 1,000 of those responses were not usable (they answered fewer than four questions). Thus:

  • Survey participation rate (relative to the full sample) = 26.9%
  • Survey participation rate (relative to the collectible sample) = 27.3%
  • Usable response rate (relative to the full sample) = 26.7%
  • Usable response rate (relative to the collectible sample) = 27.1%
  • Usability rate amongst survey participants = 99.2%

Additional Notes

Unconditional and Conditional Estimates

Unconditional estimates for a particular survey question use only that question’s data during estimation. Typically, this involves the use of edited rather than raw data. It should be noted that a question’s data edits can be affected by responses to other questions. Therefore, unconditional estimates can, in fact, be impacted by the results of other questions. But if a question’s estimate does not directly involve conditioning on another question, it is still considered an unconditional estimate.

Conditional, or cross-question, estimates for a particular survey question explicitly adjust the set of responses used during estimation based on respondents’ specific answers to one or more other survey questions.

For the 2022 BRS, a decision was made to publish only unconditional estimates. Unconditional estimates are often based on data from slightly different sets of respondents from one question to the next because of different item response patterns. In the 2022 BRS, there are instances where the same or similar measures can be gleaned from multiple questions. Because unconditional estimates of separate questions are typically based on different sets of respondents, there are cases where these similar measures produce slightly different estimates across questions.

Estimates of Employment

The estimates of employment represent the total number of employees working at an establishment for which a particular situation occurred for at least one worker. It is not an estimate of the number of employees who experienced the situation. For example, question 1 asks “Do any employees at this location CURRENTLY telework in any amount?” The employment estimate for this question measures the number of employees at establishments at which at least some telework occurs. It is not an estimate of the total number of workers who telework.

Rounding

Estimates of employment and the number of establishments are rounded to the nearest integer. Estimates of percentages are rounded to one decimal place.

Suppressions

Some estimates were not released for reasons of confidentiality. These suppressed estimates are noted with ** in the data tables.

Last Modified Date: March 22, 2023
